It seems to me that most of the "raise awareness" campaigns are for things people are plenty aware of already.
Indeed. Maybe we should instead call them "signaling awareness" campaigns. I wonder how many people would still be interested in participating.
But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.
Idea: a Rationalist Council, basically a group of high-profile rationalists who help people who have dedicated themselves to "shaping the future of humanity in a positive way" accurately assess their abilities, and who offer good strategies for leveraging those abilities to maximize their impact.
People would privately submit their resumes to the Council members, who would evaluate them and offer a personal strategy that, in their opinion, would maximize the applicant's impact.
Conjunctions are not inherently unlikely; they are less likely than their conjuncts considered separately, but could easily be much more likely than a different argument.
Well, yes. But they are less likely than their conjuncts in a specific and mathematical way, and we have good evidence that people don't multiply their uncertainties the way they should - it appears that they simply take the average (!!!).
Charitably, I count eight conjuncts in the presented argument. If he had on average 80% confidence in each premise (that raising awareness of free-market virtues will overcome status quo bias; that increased free-market enthusiasm in the first world will translate to more free markets in the third world - these don't feel like four-in-five-timers), then his plan, as stated, has at most a 17% chance of success. But Jim feels like he has an 80% chance.
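To spell out the arithmetic (a minimal sketch; the 80% figure is the hypothetical confidence from above, not a measured value):

```latex
% Eight conjuncts, each held at 80% confidence.
% Multiplying, as probability theory requires:
0.8^{8} \approx 0.168
% Averaging, as people apparently do instead:
\frac{0.8 + 0.8 + \dots + 0.8}{8} = 0.8
```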
Your response is true in a trivial way, because 17% is far higher than the chance Zeus returns, and far higher again than Zeus and Jesus returning to give each other a cosmic high-five. But we can spot those very unlikely premises - and it's only the very unlikely premises that are less likely than a long list of conjunctions. We don't think like that - we don't see our true chances.
So, if you restrict the space of premises and arguments to what humans mostly deal with in their practical lives, "conjunctions are inherently unlikely" is an excellent rule of thumb until you can sit down and do the math.
Programming does indeed have aspects in which you get lots of immediate feedback, so that you can be successful with minimal rationality. But in many cases, getting that feedback is itself a skill that requires abstract reasoning (what corner cases should I be testing?). Other aspects of programming in which feedback can be slow or expensive include: How maintainable is this code? Will other programmers understand how to use my library? How will this program perform on large problems? How could an unauthorized user abuse this program?
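To make the corner-case point concrete, here is a toy sketch in Python (the `truncate` function is made up for illustration, not taken from anyone's actual code): the happy-path test gives immediate feedback, but enumerating the corner cases has to happen in your head first.

```python
def truncate(text: str, limit: int) -> str:
    """Return text cut to at most `limit` characters, appending '...' if cut."""
    if limit < 0:
        raise ValueError("limit must be non-negative")
    if len(text) <= limit:
        return text
    return text[:limit] + "..."

# Happy path: cheap, fast feedback.
assert truncate("hello world", 5) == "hello..."

# Corner cases: nothing prompts you to write these; you have to
# reason abstractly about the input space to know they exist.
assert truncate("hello", 5) == "hello"   # exactly at the limit
assert truncate("", 0) == ""             # empty input, zero limit
assert truncate("hi", 10) == "hi"        # shorter than the limit
```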
Consider the following goals:
And construct a simple Venn diagram. Carefully working through all of Donald Knuth's The Art of Computer Programming or Landau and Lifshitz's ten-volume Course of Theoretical Physics would be a task right smack dab in the center of such a Venn diagram, passing the acid test of items one through three on this list.
What generates the criticism of LessWrong as "shiny"?
I think many humans are susceptible to a trap of expending energy on #2 or on #3 ...
Indeed. Just reading The Art of Computer Programming would be a pointless task. Incidentally, this wisdom is so obvious in academia that when people say "I read X" they really mean something like "I took notes and worked out most of the exercises". When they want to convey that they just read the text, they say "I looked at X". This language definitely misled me when I was an undergrad.
The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" subjects on which respectable people should not speak...
There are some distinctions to be made here. Cryonics obviously provides a better chance to see the future after dying than rotting six feet under.
Yes, but it is less obvious that the chance is large enough to be worth the money.
Regarding retirement investment, just ask your parents or grandparents.
My example was that you cannot find out that way which types of investments can be relied upon in the coming decades. For example, many in the US entrusted their savings to real estate, which had been trustworthy for generations. And then it wasn't.
Yet this argument against the necessity of empirical data breaks down at some point. Shaping the Singularity is not on a par with having a positive impact on the distant poor. If you claim that predictions and falsifiability are unrelated concepts, that's fine. But believing some predictions - e.g., a technological Singularity spawned by AGI seeds capable of superhuman recursive self-improvement - is not the same as believing others - e.g., that a retirement plan will provide for old age.
Yes, there is a difference of degree between the difficulty of figuring out whether mortgage-backed securities were as trustworthy as people thought (note tha...
Let me put down a few of my own thoughts on the subject.
I think it's odd that LessWrong spends so much time pondering whether or not it should exist! Most blogs don't do that; most communities (online or otherwise) don't do that. And if a person did that, you'd consider her rather abnormal. I view such discussion as noise; or at least a sidebar to more interesting topics.
I disagree that LW can't be useful to anyone who'd understand it. I offer my own experience as an example: it was useful to me in several ways.
LW clinched my own break with religion (particularly the essay "Belief in Belief").
Eliezer's explanation of quantum physics is very interesting, intuitive, and as far as I know isn't replicated in any textbook.
LW introduced me to futurist topics that I simply hadn't heard of, or realized that sensible people thought about (cryonics, the Singularity).
I met a few real-life friends through LW, for whom I have a lot of respect.
Finally, as far as instrumental rationality goes, LW took the place of two other, lower-quality internet forums in my free-time budget, so I spend more time out of my day trying to be thoughtful, rather than sleazy and
I didn't think that someone who reached your level of education needed LW to break with religion.
People certainly ought not need to. By that I mean that people with the general cognitive capacity of humans have more than enough ability to evaluate religion as nonsense given even basic modern education. But even so it is damn hard to break free. Part of what makes the 'Belief in Belief' post particularly useful is that it is written in an attempt to understand what is really going on when people 'believe' things that don't make sense given what they know.
The social factors are also important. Religion is essentially about signalling tribal affiliation. It is dangerous to discard a tribal identity without already having found yourself a new tribe - even a compartmentalised online tribe used primarily for cognitive affiliation.
Nevertheless, I never tried to argue that Less Wrong is useless. It's one of my favorite places in the metaverse.
This is something I have to remind myself of when reading your comments. You are sincere. To someone who didn't know your online self at all, some of your arguments and questions would seem far more rhetorical than you intend them. You actually do update on new information, which is a big deal!
Squared differences are just what is involved when you are calculating things like standard deviation.
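For reference, the textbook formula being parsed (sample standard deviation, using the common n-1 convention):

```latex
% The "squared differences" from the mean, averaged and square-rooted:
s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2}
```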
Never mind that; just parse the damn phrase! All you need to know is what a "difference" is, and what "to square" means.
Why, I wonder, do people assume that words lose their individual meanings when combined, so that something like "squared differences" registers as "[unknown vocabulary item]" rather than "differences that have been squared"?
Because quite often sophisticated people will punish you socially if you don't take special care to pay homage to whatever extra meaning the combined phrase has taken on. Caution in such cases is a practical social move.
And that claim is what I have been inquiring about. How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this[.]
My post didn't claim Less Wrong contains the best rationalists anywhere. It claimed that for many readers, Less Wrong is the best community of aspiring rationalists that they have easy access to. I wish you would be careful to be clear about exactly what is at issue and to avoid straw man attacks.
As to how to evaluate Less Wrongers’, or others’, rationality skills: It is hard to assess others’ rationality by evaluating their opinions on a small number of controversial issues. This difficulty stems partly from the difficulty of oneself determining the right answers (so as to know whether to raise or lower one’s estimate of others with those views). And it stems in part from the fact that a small number of yes/no or multiple-choice-style opinions will provide only limited evidence, especially given communities’ tendency to copy the opinions of others within the community.
One can more easily notice what processes LWers and others follow, and one can ask whether these processes are likely to promote true beliefs....
My impression is that XiXiDu is curious and that what you're frustrated by has more to do with his difficulty expressing himself than with closed-mindedness on his part. Note that he compiled a highly upvoted list of references and resources for Less Wrong - I read this as evidence that he's interested in Less Wrong's mission and think that his comments should be read more charitably.
I'll try to recast what I think he's trying to say in clearer terms sometime over the next few days.
I agree with you, actually. He does seem curious; I shouldn't have said otherwise. He just also seems drawn to the more primate-politics-prone topics within Less Wrong, and he seems further to often express himself in spaghetti-at-the-wall mixtures of true and untrue, and relevant and irrelevant statements that confuse the conversation.
Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.
Though there are many brilliant people within academia, there is also shortsightedness and groupthink there, which could have led the academic establishment to ignore important issues concerning the safety of advanced future technologies.
I've seen very little (if anything) in the way of careful rebuttals of SIAI's views from the academic establishment. As such, I don't think that there's strong evidence against SIAI's claims. At the same time, I have the impression that SIAI has not done enough to solicit feedback from the academic establishment.
John Baez will be posting an interview with Eliezer sometime soon. It should be informative to see the back and forth between the two of them.
Concerning the apparent groupthink on Less Wrong: something relevant that I've learned over the past few months is that some of the vocal SIAI supporters on LW express views that are quite unrepresentative of those of the SIAI staff. I initially misjudged SIAI because I was unaware of this.
I believe that if you're going to express doubts and/or criticism about LW and/or SIAI you should take the time and energy to express these carefully and diplomatically. Expressing unc
Thanks for making this post. I especially like the paragraph:
There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, or what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments really will be safe bets for the next forty years. For these goals we are forced to use reasoning, as failure-prone as human reasoning is. If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.
Thanks for your post Anna. It really got me thinking. I was going to write you a long comment here but it got too long so I made it into a new top level post.
And don't let the concern trolls get you down... remember... they totally support us!
I think a lot of the benefit I've derived from LessWrong is subconscious in nature - rational algorithms and basic Logic 101 are such a core part of the community that you wind up adopting them on a deeper level than you do just by learning about them. "If A then B; A, therefore B; but B doesn't imply A" is easy to learn - 20 minutes at most for a particularly slow student - but applying it is a whole other story.
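For the record, the Logic 101 pattern being gestured at (standard notation, nothing specific to LW):

```latex
% Valid (modus ponens):
A \to B,\; A \;\vdash\; B
% Invalid (affirming the consequent):
A \to B,\; B \;\nvdash\; A
```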
For example, an idea recently struck me during a conversation (I plan to write a more-detailed piece for my blog about this): in regards to Iraq, the ...
[safe bets](link AAA ratings?)
This and other clues lead me to believe that this post was published inadvertently.
To enrich these great points about how to get more epistemic rationality, I would suggest intentionally associating positive emotions with epistemic rationality practices.
Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality
We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.
In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”. He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.
So far, excellent news, right? A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future. Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.
The part that concerned me was that Jim had put a site-blocker on LW (as well as all of his blogs) after reading Patri’s post, which, he said, had “hit him like a load of bricks”. Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments. But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be. And they were the sort of errors LW could have helped with. And there was no obvious force in his off-line, focused, productive life of a sort that could similarly help.
So, in case it’s useful to others, a review of what LW is useful for.
When you do (and don’t) need epistemic rationality
For some tasks, the world provides rich, inexpensive empirical feedback. In these tasks you hardly need reasoning. Just try the task many ways, steal from the best role-models you can find, and take care to notice what is and isn’t giving you results.
Thus, if you want to learn to sculpt, reading Less Wrong is a bad way to go about it. Better to find some clay and a hands-on sculpting course. The situation is similar for small talk, cooking, selling, programming, and many other useful skills.
Unfortunately, most of us also have goals for which we can obtain no such ready success/failure data. For example, if you want to know whether cryonics is a good buy, you can’t just try buying it and not-buying it and see which works better. If you miss your first bet, you’re out for good.
There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, or what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments *really will* be safe bets for the next forty years. For these goals we are forced to use reasoning, as failure-prone as human reasoning is. If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.
The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" subjects on which respectable people should not speak, or else to focus one's discussion on the most similar-seeming subject for which it *is* easy to gather empirical data (and so to, for example, rate charities as "good" when they have a low percentage of overhead, instead of a high impact). Insofar as we are stuck caring about such goals and betting our actions on various routes for their achievement, this is not much help.[2]
How to develop epistemic rationality
If you want to develop epistemic rationality, it helps to spend time with the best epistemic rationalists you can find. For many, although not all, this will mean Less Wrong. Read the sequences. Read the top current conversations. Put your own thinking out there (in the discussion section, for starters) so that others can help you find mistakes in your thinking, and so that you can get used to holding your own thinking to high standards. Find or build an in-person community of aspiring rationalists if you can.
Is it useful to try to read every single comment? Probably not, on the margin; better to read textbooks or to do rationality exercises yourself. But reading the Sequences helped many of us quite a bit; and epistemic rationality is the sort of thing for which sitting around reading (even reading things that are shiny-fun) can actually help.
[1] To be specific: Jim was considering personally "raising awareness" about the virtues of the free market, in the hopes that this would (indirectly) boost economic growth in the third world, which would enable more people to be educated, which would enable more people to help aim for a positive human future and an eventual positive singularity.
There are several difficulties with this plan. For one thing, it's complicated; in order to work, his awareness raising would need to indeed boost free market enthusiasm AND US citizens' free market enthusiasm would need to indeed increase the use of free markets in the third world AND this result would need to indeed boost welfare and education in those countries AND a world in which more people could think about humanity's future would need to indeed result in a better future. Conjunctions are unlikely, and this route didn't sound like the most direct path to Jim's stated goal.
For another thing, there are good general arguments suggesting that it is often better to donate than to work directly in a given field, and that, given the many orders of magnitude differences in efficacy between different sorts of philanthropy, it's worth doing considerable research into how best to give. (Although to be fair, Jim's emailing me was such research, and he may well have appreciated that point.)
The biggest reason it seemed Jim would benefit from LW was just manner; Jim seemed smart and well-meaning, but more verbally jumbled, and less good at factoring complex questions into distinct, analyzable pieces, than I would expect had he spent longer around LW.